AAAI AI-Alert for Jun 1, 2023
Get Ready for 3D-Printed Organs and a Knife That 'Smells' Tumors
To doctors and nurses working 75 years ago, when the UK's National Health Service was founded, a modern ward would be completely unrecognizable. Fast-forward into the future, and hospitals are likely to look very different again. These are some of the changes you're likely to see in the years to come. Researchers at Johns Hopkins University are developing a surgical robot capable of performing operations fully autonomously. The robot is equipped with 3D vision and a machine learning algorithm that allows it to plan and adapt during surgery.
- North America > United States > Texas > Bexar County > San Antonio (0.06)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.06)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.34)
AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation
The CEOs of the world's leading artificial intelligence companies, along with hundreds of other AI scientists and experts, made their most unified statement yet about the existential risks to humanity posed by the technology, in a short open letter released Tuesday. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the letter, released by California-based non-profit the Center for AI Safety, says in its entirety. The CEOs of what are widely seen as the three most cutting-edge AI labs--Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic--are all signatories to the letter. So is Geoffrey Hinton, a man widely acknowledged to be the "godfather of AI," who made headlines last month when he stepped down from his position at Google and warned of the risks AI posed to humanity.
The biggest problem in AI? Lying chatbots
Companies are also spending time and money improving their models by testing them with real people. A technique called reinforcement learning from human feedback, in which human testers manually improve a bot's answers and then feed them back into the system, is widely credited with making ChatGPT so much better than the chatbots that came before it. Another popular approach is to connect chatbots to databases of factual or more trustworthy information, such as Wikipedia, Google search, or bespoke collections of academic articles or business documents.
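The second approach described above, grounding a chatbot's answers in a trusted document collection, can be sketched in miniature. Everything below is illustrative and not from the article: the tiny corpus, the word-overlap scoring, and the answer format are stand-ins for the dense vector retrieval and language-model generation a real system would use.

```python
# Illustrative sketch: answer a question by first retrieving the most
# relevant entry from a trusted corpus, then citing that entry as the
# source. Corpus contents and scoring are invented for demonstration.

def retrieve(question, corpus):
    """Return the corpus entry sharing the most words with the question."""
    q_words = set(question.lower().split())

    def overlap(entry):
        return len(q_words & set(entry["text"].lower().split()))

    return max(corpus, key=overlap)

corpus = [
    {"source": "encyclopedia", "text": "The NHS was founded in 1948 in the UK."},
    {"source": "style guide", "text": "Use sentence case for headlines."},
]

best = retrieve("When was the NHS founded?", corpus)
print(f"According to {best['source']}: {best['text']}")
```

The point of the pattern is that the bot's claim is tied to a retrievable source the user can check, rather than generated from the model's parameters alone.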
Effective as a collective: Researchers investigate the swarming behavior of microrobots
Researchers are looking for new ways to perform tasks on the micro- and nanoscale that are otherwise difficult to realize, particularly as the miniaturization of devices and components begins to reach physical limits. One option being considered is the use of collectives of robotic units in place of a single robot to complete a task. "The task-solving capabilities of one microrobot are limited due to its small size," said Professor Thomas Speck, who headed the study at Mainz University. "But a collective of such robots working together may well be able to carry out complex assignments with considerable success." Statistical physics is relevant here because it analyzes models describing how such collective behavior can emerge from simple interactions, much as birds do when they flock together.
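A standard statistical-physics model of this kind of emergence is the Vicsek model, in which each agent repeatedly aligns its heading with that of its nearby neighbors plus a little noise, and flock-like collective motion arises without any leader. The sketch below is illustrative only (the article does not specify a model); parameters are arbitrary and, for simplicity, neighbor distances ignore the periodic wraparound of the box.

```python
# Minimal Vicsek-style flocking sketch: agents align with neighbors
# within radius R, plus random noise, and the degree of alignment is
# summarized by an order parameter between 0 (disordered) and 1.
import math
import random

random.seed(0)

N, L, R, SPEED, NOISE, STEPS = 50, 10.0, 1.0, 0.3, 0.1, 100

# Random initial positions in an L x L box and random headings.
pos = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def step(pos, theta):
    new_theta = []
    for xi, yi in pos:
        # Average heading of all agents within radius R (including self).
        sx = sy = 0.0
        for (xj, yj), tj in zip(pos, theta):
            if math.hypot(xi - xj, yi - yj) < R:
                sx += math.cos(tj)
                sy += math.sin(tj)
        # Align with neighbors, perturbed by a small random angle.
        new_theta.append(math.atan2(sy, sx) + random.uniform(-NOISE, NOISE))
    # Move each agent forward; positions wrap around the box edges.
    new_pos = [((x + SPEED * math.cos(t)) % L, (y + SPEED * math.sin(t)) % L)
               for (x, y), t in zip(pos, new_theta)]
    return new_pos, new_theta

for _ in range(STEPS):
    pos, theta = step(pos, theta)

# Order parameter: mean heading vector length, 1.0 = perfect alignment.
order = math.hypot(sum(math.cos(t) for t in theta),
                   sum(math.sin(t) for t in theta)) / N
print(f"alignment after {STEPS} steps: {order:.2f}")
```

The interesting property is that alignment is nowhere programmed globally: only the local neighbor-averaging rule exists, yet ordered motion tends to emerge, which is exactly the kind of interaction-to-collective link the article attributes to statistical physics.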
'I do not think ethical surveillance can exist': Rumman Chowdhury on accountability in AI
Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls "2am brain", a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. "It's just like baking," she says. "You can't force it, you can't turn the temperature up, you can't make it go faster. It will take however long it takes. And when it's done baking, it will present itself."
- North America > United States > New York (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Spain (0.05)
- Europe > Austria > Vienna (0.05)
Rishi Sunak races to tighten rules for AI amid fears of existential risk
Rishi Sunak is scrambling to update the government's approach to regulating artificial intelligence, amid warnings that the industry poses an existential risk to humanity unless countries radically change how they allow the technology to be developed. The prime minister and his officials are looking at ways to tighten the UK's regulation of cutting-edge technology, as industry figures warn the government's AI white paper, published just two months ago, is already out of date. Government sources have told the Guardian the prime minister is increasingly concerned about the risks posed by AI, only weeks after his chancellor, Jeremy Hunt, said he wanted the UK to "win the race" to develop the technology. Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator. Meanwhile Conservative and Labour MPs are calling on the prime minister to pass a separate bill that could create the UK's first AI-focused watchdog.
Why We Need to See Inside AI's Black Box
The following essay is reprinted with permission from The Conversation, an online publication covering the latest research. For some people, the term "black box" brings to mind the recording devices in airplanes that are valuable for postmortem analyses if the unthinkable happens. For others it evokes small, minimally outfitted theaters. But black box is also an important term in the world of artificial intelligence. AI black boxes refer to AI systems with internal workings that are invisible to the user.
Robots and Rights: Confucianism Offers Alternative
The analysis, by a researcher at Carnegie Mellon University (CMU), appears in Communications of the ACM, published by the Association for Computing Machinery. "People are worried about the risks of granting rights to robots," notes Tae Wan Kim, Associate Professor of Business Ethics at CMU's Tepper School of Business, who conducted the analysis. "Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers -- not rights bearers -- could work better." Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self.